
    Results of TC-1 boost pump icing tests in the space power facility

    A series of tests was conducted in the Space Power Facility to investigate the failure of the Centaur oxidizer boost pump during the Titan/Centaur proof flight of February 11, 1974. The three basic objectives of the tests were to: (1) determine whether an evaporative-freezing failure mechanism could have prevented the pump from operating; (2) determine whether steam from the exhaust of one of the attitude control engines could have entered a pump seal cavity and caused the failure; and (3) obtain data on the heating effects of the exhaust plume from a hydrogen peroxide attitude control engine.

    The population of hot subdwarf stars studied with Gaia II. The Gaia DR2 catalogue of hot subluminous stars

    Based on data from the ESA Gaia Data Release 2 (DR2) and several ground-based, multi-band photometric surveys, we compiled an all-sky catalogue of 39,800 hot subluminous star candidates, selected in Gaia DR2 by means of colour, absolute magnitude and reduced proper motion cuts. We expect the majority of the candidates to be hot subdwarf stars of spectral types B and O, followed by blue horizontal branch stars of late B-type (HBB), hot post-AGB stars, and central stars of planetary nebulae. The contamination by cooler stars should be about 10%. The catalogue is magnitude limited to Gaia G < 19 mag and covers the whole sky. Except within the Galactic plane and the LMC/SMC regions, we expect the catalogue to be almost complete up to about 1.5 kpc. The main purpose of this catalogue is to serve as an input target list for the large-scale photometric and spectroscopic surveys which are ongoing or scheduled to start in the coming years. In the long run, securing a statistically significant sample of spectroscopically confirmed hot subluminous stars is key to advancing towards a more detailed understanding of the latest stages of stellar evolution for single and binary stars. Comment: 13 pages, A&A, accepted
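The colour, absolute-magnitude and reduced-proper-motion cuts mentioned above can be sketched as follows. This is our own illustration: the G < 19 mag limit comes from the abstract, but every other threshold, and the function name, is a placeholder, not the catalogue's actual selection.

```python
import numpy as np

def select_hot_subluminous(g_mag, bp_rp, parallax_mas, pm_mas_yr):
    """Illustrative Gaia-style candidate selection (placeholder cut values)."""
    # Absolute magnitude from the parallax (mas): M_G = G + 5*log10(parallax) - 10
    abs_g = g_mag + 5.0 * np.log10(parallax_mas) - 10.0
    # Reduced proper motion: H_G = G + 5*log10(mu) + 5, with mu in arcsec/yr
    h_g = g_mag + 5.0 * np.log10(pm_mas_yr / 1000.0) + 5.0
    return (
        (g_mag < 19.0)      # catalogue magnitude limit (from the abstract)
        & (bp_rp < 0.0)     # blue colour cut (placeholder)
        & (abs_g < 7.0)     # brighter than the main sequence at this colour (placeholder)
        & (h_g > 5.0)       # reduced-proper-motion cut (placeholder)
    )
```

In practice such cuts are tuned on training samples of known hot subdwarfs; the point here is only the shape of the selection, not the numbers.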

    Resummation of perturbation series and reducibility for Bryuno skew-product flows

    We consider skew-product systems on T^d x SL(2,R) for Bryuno base flows close to constant coefficients, depending on a parameter, in any dimension d, and we prove reducibility for a large-measure set of values of the parameter. The proof is based on a resummation procedure for the formal power series of the conjugation, and uses renormalisation-group techniques from quantum field theory. Comment: 30 pages, 12 figures
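As a hedged sketch of the setting (our notation, simplified from the abstract): the skew-product flow and the meaning of reducibility can be written as

```latex
% Skew-product flow on T^d x SL(2,R): the base rotates with a frequency
% vector \omega satisfying a Bryuno condition; the fibre evolves linearly.
\dot{\psi} = \omega, \qquad \dot{X} = A(\psi)\,X,
\qquad \psi \in \mathbb{T}^d,\ X \in SL(2,\mathbb{R}),
% with A(\psi) close to a constant matrix. Reducibility means finding a
% change of variables X = B(\psi)\,Y, with B : \mathbb{T}^d \to SL(2,\mathbb{R}),
% such that the transformed system has constant coefficients:
\dot{Y} = A_{\infty}\,Y, \qquad A_{\infty} \ \text{constant.}
```

The paper's contribution is proving this for a large-measure set of parameter values by resumming the formal power series for B.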

    Advancements and challenges in direct loss-based seismic design

    Provisions in seismic design codes generally focus on collapse prevention or life safety for major, rare earthquakes and on damage prevention for minor, frequent ones. The evolution of theoretical knowledge, modelling abilities, and actual damage observations has led to higher awareness of the implications of code provisions for earthquake risk. The economic consequences of the 1994 Northridge (USA) earthquake symbolically marked a paradigm shift in evaluating structural performance, leading to performance-based earthquake engineering, now considered a standard for risk/loss assessment. Different research efforts have improved the common force-based design to include risk-related concepts within the design process, such as methods targeting displacements, seismic fragility, the mean annual frequency of exceeding a given damage state, losses, or resilience metrics. This paper focuses on the recently developed direct loss-based design (DLBD), which allows designing structures that achieve a given loss-related metric under the relevant site-specific seismic hazard virtually without design iterations (generally fewer than three). After describing the design methodology, this paper discusses: 1) the accuracy of the procedure for the design of new reinforced concrete buildings (monolithic or base-isolated) and the retrofit of existing ones; 2) the validation studies needed to maximise the scope of DLBD; 3) the methodological advancements needed to improve the accuracy of the embedded loss-estimation method; and 4) the operational advances needed to render DLBD appealing in practice.
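One of the risk metrics named above, the mean annual frequency (MAF) of exceeding a damage state, is obtained by integrating a fragility curve over the site hazard curve. A minimal numerical sketch, with a toy power-law hazard and placeholder lognormal fragility parameters (none of the numbers come from the paper):

```python
import numpy as np
from math import erf, sqrt

def maf_damage_state(im, hazard, median, beta):
    """MAF of exceeding a damage state: integrate the fragility over the hazard.

    im:     intensity-measure levels (e.g. Sa(T1) in g)
    hazard: mean annual frequency of exceeding each im level
    median, beta: lognormal fragility parameters (placeholders here)
    """
    # P(damage state exceeded | IM = x): lognormal CDF
    p_ds = np.array([0.5 * (1.0 + erf(np.log(x / median) / (beta * sqrt(2.0))))
                     for x in im])
    slope = -np.gradient(hazard, im)      # |d(hazard)/d(IM)|, positive
    integrand = p_ds * slope
    # trapezoidal integration over the IM range
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(im)))

im = np.linspace(0.05, 2.0, 400)
hazard = 1e-4 * im**-2.0                  # toy power-law hazard curve
lam_ds = maf_damage_state(im, hazard, median=0.5, beta=0.4)
```

DLBD inverts this logic: instead of computing the MAF (or a loss metric) for a given design, the designer fixes the target value and solves for the structural properties that achieve it.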

    Direct loss-based seismic design: state of the art and current challenges

    Seismic design codes typically aim to prevent collapse or ensure life safety during major, infrequent earthquakes, while minimizing damage during minor, frequent ones. However, advancements in theoretical knowledge, modeling capabilities, and observed damage have increased awareness of the impact of these codes on earthquake risk. The 1994 Northridge earthquake in the US caused significant economic consequences, prompting a paradigm shift towards performance-based earthquake engineering for risk and loss assessment. Several research efforts suggest replacing traditional force-based design with methods targeting displacements, seismic fragility, the mean annual frequency of exceeding a damage state, losses, and resilience metrics. This paper focuses on direct loss-based design (DLBD), a newly developed method that enables designing structures to achieve a desired loss-related metric under site-specific seismic hazard virtually without design iterations. The paper explores the effectiveness of DLBD for designing new reinforced concrete buildings (monolithic or base-isolated) or retrofitting existing ones, the validation studies required to expand its scope, the improvements needed for more accurate loss-estimation methods, and the operational advances needed to make DLBD appealing in practice.

    Localized exciton-polariton modes in dye-doped nanospheres: a quantum approach

    We model a dye-doped polymeric nanosphere as an ensemble of quantum emitters and use it to investigate the localized exciton-polaritons supported by such a nanosphere. By determining the time evolution of the density matrix of the collective system, we explore how an incident laser field may cause transient optical field enhancement close to the surface of such nanoparticles. Our results provide further evidence that excitonic materials can be used to good effect in nanophotonics. Comment: 16 pages, 4 figures
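The density-matrix time evolution mentioned above can be illustrated in heavily reduced form. The paper evolves the density matrix of a collective ensemble coupled to the nanosphere field; the sketch below is our own minimal analogue, a single laser-driven two-level emitter with spontaneous emission (Lindblad form), with all rates as placeholders, integrated by fourth-order Runge-Kutta.

```python
import numpy as np

# Placeholder parameters: decay rate, Rabi frequency, laser detuning
gamma, omega, delta = 1.0, 2.0, 0.0
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
# Rotating-frame Hamiltonian (hbar = 1); off-diagonals = omega/2 drive term
H = 0.5 * np.array([[-delta, omega], [omega, delta]], dtype=complex)

def drho(rho):
    """Lindblad master equation: coherent term + spontaneous emission."""
    comm = -1j * (H @ rho - rho @ H)
    n = sm.conj().T @ sm
    lind = gamma * (sm @ rho @ sm.conj().T - 0.5 * (n @ rho + rho @ n))
    return comm + lind

rho = np.diag([1.0, 0.0]).astype(complex)        # start in the ground state
dt = 1e-3
for _ in range(20000):                            # RK4 integration to t = 20/gamma
    k1 = drho(rho)
    k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2)
    k4 = drho(rho + dt * k3)
    rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

excited_pop = rho[1, 1].real   # steady-state excited-state population
```

Scaling the same machinery to an ensemble (as in the paper) replaces the 2x2 density matrix by the collective one and adds emitter-field and emitter-emitter couplings; the integration loop is unchanged in spirit.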

    Power computation for the triboelectric nanogenerator

    We consider, from a mathematical perspective, the power generated by a contact-mode triboelectric nanogenerator, an energy harvesting device that has been well studied recently. We encapsulate the behaviour of the device in a differential equation which, although linear and of first order, has periodic coefficients, leading to some interesting mathematical problems. In studying these, we derive approximate forms for the mean power generated and for the current waveforms, and describe a procedure for computing the Fourier coefficients of the current, enabling us to show how the power is distributed over the harmonics. Comparisons with accurate numerics validate our analysis.
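A first-order linear ODE with periodic coefficients of the kind described can be explored numerically as follows. This is our own generic circuit sketch, R dQ/dt + Q/C(t) = V(t) with periodic C and V, not the paper's exact model, and all component values are placeholders; it computes the mean load power and the Fourier coefficients of the steady-state current.

```python
import numpy as np

R = 1.0e6                          # load resistance (placeholder)
T = 1.0                            # period of the mechanical motion

def C(t):                          # periodic capacitance (placeholder form)
    return 1e-9 * (2.0 + np.cos(2 * np.pi * t / T))

def V(t):                          # periodic open-circuit voltage (placeholder)
    return 100.0 * np.sin(2 * np.pi * t / T)

n_per, steps = 50, 2000            # integrate 50 periods to pass the transient
dt = T / steps
q, t = 0.0, 0.0
current = np.empty(steps)          # current samples over the final period
for i in range(n_per * steps):
    dq = (V(t) - q / C(t)) / R     # explicit Euler step of R dQ/dt = V - Q/C
    q += dt * dq
    t += dt
    if i >= (n_per - 1) * steps:
        current[i - (n_per - 1) * steps] = dq

mean_power = R * np.mean(current**2)        # mean power dissipated in the load
harmonics = np.fft.rfft(current) / steps    # Fourier coefficients of I(t)
```

Because C(t) modulates the coefficient of Q, the current acquires higher harmonics even for a purely sinusoidal drive; the FFT of the final period shows how the power splits across them, which is the quantity the paper computes analytically.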

    Simplicity versus accuracy trade-off in estimating seismic fragility of existing reinforced concrete buildings

    This paper investigates the trade-off between simplicity (modelling effort and computational time) and result accuracy in seismic fragility analysis of reinforced concrete (RC) frames. For many applications, simplified methods focusing on “archetype” structural models are often the state of practice. These simplified approaches may provide a rapid yet accurate estimation of seismic fragility, requiring a relatively small amount of input data and computational resources. However, such approaches often fail to capture specific structural deficiencies and/or failure mechanisms that might significantly affect the final assessment outcomes (e.g. shear failure in beam-column joints, or in-plane and out-of-plane failure of infill walls, among others). To overcome these shortcomings, the alternative response analysis methods considered in this paper are all characterised by a mechanics-based approach and the explicit consideration of record-to-record variability in modelling seismic input/demands. Specifically, this paper compares three different seismic response analysis approaches, each characterised by a different level of refinement: 1) low refinement - non-linear static analysis (either analytical SLaMA or pushover analysis), coupled with the capacity spectrum method; 2) medium refinement - non-linear time-history analysis of equivalent single degree of freedom (SDoF) systems calibrated on either the SLaMA-based or the pushover-based force-displacement curves; 3) high refinement - non-linear time-history analysis of multi-degree of freedom (MDoF) numerical models. In all cases, fragility curves are derived through a cloud-based approach employing unscaled real (i.e. recorded) ground motions. Fourteen four- or eight-storey RC frames, showing different plastic mechanisms and infill distributions, are analysed using each method.
The results show that non-linear time-history analysis of equivalent SDoF systems is not substantially superior to non-linear static analysis coupled with the capacity spectrum method. The estimated median fragility (for different damage states) of the simplified methods generally falls within ±20% (usually as an under-estimation) of the corresponding estimates from the MDoF non-linear time-history analysis, with slightly higher errors for the uniformly-infilled frames; in the latter case, the error range increases up to ±32%. The fragility dispersion is generally over-estimated by up to 30%. Although such bias levels are generally non-negligible, their rigorous characterisation can potentially guide an analyst in selecting a specific fragility derivation approach, depending on their needs and context, or in calibrating appropriate correction factors for the more simplified methods.
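The cloud-based fragility derivation used throughout the comparison can be sketched in a few lines: regress ln(EDP) on ln(IM) over unscaled records, then convert the demand model into a lognormal fragility for a damage-state threshold. The data below are synthetic and every number is a placeholder; only the procedure mirrors the approach named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "cloud" of analysis results: IM (e.g. Sa(T1) in g) vs peak drift
im = rng.uniform(0.05, 1.5, 200)
edp = 0.02 * im**1.1 * rng.lognormal(0.0, 0.35, im.size)

# Cloud regression: ln(EDP) = ln(a) + b * ln(IM) + error
b, ln_a = np.polyfit(np.log(im), np.log(edp), 1)
resid = np.log(edp) - (ln_a + b * np.log(im))
beta_d = resid.std(ddof=2)                   # demand dispersion about the fit

edp_ds = 0.01                                # damage-state drift threshold (placeholder)
im50 = np.exp((np.log(edp_ds) - ln_a) / b)   # fragility median IM
beta = beta_d / b                            # fragility dispersion
```

The paper's comparison then comes down to how much `im50` and `beta` shift when `edp` is produced by SLaMA, pushover-based SDoF, or full MDoF time-history analysis.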